Robust Dense Mapping for Large-Scale Dynamic Environments
We present a stereo-based dense mapping algorithm for large-scale dynamic
urban environments. In contrast to other existing methods, we simultaneously
reconstruct the static background, the moving objects, and the potentially
moving but currently stationary objects separately, which is desirable for
high-level mobile robotic tasks such as path planning in crowded environments.
We use both instance-aware semantic segmentation and sparse scene flow to
classify objects as either background, moving, or potentially moving, thereby
ensuring that the system is able to model objects with the potential to
transition from static to dynamic, such as parked cars. Given camera poses
estimated from visual odometry, both the background and the (potentially)
moving objects are reconstructed separately by fusing the depth maps computed
from the stereo input. In addition to visual odometry, sparse scene flow is
also used to estimate the 3D motions of the detected moving objects, in order
to reconstruct them accurately. A map pruning technique is further developed to
improve reconstruction accuracy and reduce memory consumption, leading to
increased scalability. We evaluate our system thoroughly on the well-known
KITTI dataset. Our system is capable of running on a PC at approximately
2.5 Hz, with the primary bottleneck being the instance-aware semantic
segmentation, which is a limitation we hope to address in future work. The
source code is available from the project website
(http://andreibarsan.github.io/dynslam).
Comment: Presented at IEEE International Conference on Robotics and Automation (ICRA), 201
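The three-way classification described in the abstract (instance-aware semantic segmentation combined with sparse scene flow) can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; the class list, threshold value, and function names are assumptions:

```python
# Hypothetical sketch: label each detection as "background", "moving", or
# "potentially moving" by combining its semantic class with a residual
# scene-flow motion score (camera ego-motion already compensated via
# visual odometry).

# Semantic classes whose instances are capable of independent motion
# (an assumed list, not taken from the paper).
DYNAMIC_CAPABLE = {"car", "truck", "bus", "pedestrian", "cyclist"}

def classify_object(semantic_class, residual_flow_ms, motion_threshold=0.5):
    """Classify one detected instance.

    residual_flow_ms: mean 3D scene-flow magnitude (m/s) of the object's
    sparse feature matches after subtracting the camera's own motion.
    """
    if semantic_class not in DYNAMIC_CAPABLE:
        return "background"        # e.g. buildings, vegetation
    if residual_flow_ms > motion_threshold:
        return "moving"            # reconstructed with its own 3D motion
    return "potentially moving"    # e.g. a parked car: static now, but may
                                   # transition from static to dynamic

print(classify_object("building", 0.0))  # -> background
print(classify_object("car", 3.2))       # -> moving
print(classify_object("car", 0.1))       # -> potentially moving
```

Objects in the "potentially moving" bucket are reconstructed separately from the static background, so the map stays valid even if a parked car later drives away.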
Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization
In this paper we propose a novel semantic localization algorithm that
exploits multiple sensors and has precision on the order of a few centimeters.
Our approach does not require detailed knowledge about the appearance of the
world, and our maps require orders of magnitude less storage than maps utilized
by traditional geometry- and LiDAR intensity-based localizers. This is
important as self-driving cars need to operate in large environments. Towards
this goal, we formulate the problem in a Bayesian filtering framework, and
exploit lanes, traffic signs, as well as vehicle dynamics to localize robustly
with respect to a sparse semantic map. We validate the effectiveness of our
method on a new highway dataset consisting of 312km of roads. Our experiments
show that the proposed approach is able to achieve 0.05m lateral accuracy and
1.12m longitudinal accuracy on average while taking up only 0.3% of the storage
required by previous LiDAR intensity-based approaches.
Comment: 8 pages, 4 figures, 4 tables, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)
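The Bayesian filtering formulation mentioned in the abstract can be illustrated with a minimal 1-D histogram filter over the vehicle's lateral offset to a mapped lane boundary. This is a toy sketch under assumed noise models and grid resolution, not the paper's multi-sensor implementation:

```python
# Minimal 1-D histogram (Bayes) filter sketch for lateral localization
# against a sparse lane map. Grid range, noise sigmas, and variable names
# are assumptions for illustration only.
import numpy as np

cells = np.linspace(-2.0, 2.0, 81)              # candidate lateral offsets (m)
belief = np.full_like(cells, 1.0 / len(cells))  # uniform prior

def predict(belief, motion_sigma=0.1):
    """Motion update: diffuse the belief to model vehicle-dynamics noise."""
    kernel = np.exp(-0.5 * (cells / motion_sigma) ** 2)
    kernel /= kernel.sum()
    belief = np.convolve(belief, kernel, mode="same")
    return belief / belief.sum()

def update(belief, measured_offset, meas_sigma=0.05):
    """Measurement update from a detected lane boundary (Gaussian likelihood)."""
    likelihood = np.exp(-0.5 * ((cells - measured_offset) / meas_sigma) ** 2)
    belief = belief * likelihood
    return belief / belief.sum()

belief = predict(belief)
belief = update(belief, measured_offset=0.30)
estimate = cells[np.argmax(belief)]  # MAP lateral offset, close to 0.30 m
```

Storing only sparse semantic elements (lane boundaries, traffic signs) rather than dense geometry or LiDAR intensity is what yields the orders-of-magnitude storage reduction the abstract reports.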
CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation
Realistic simulation is key to enabling safe and scalable development of
self-driving vehicles. A core component is simulating the sensors so that the
entire autonomy system can be tested in simulation. Sensor simulation involves
modeling traffic participants, such as vehicles, with high quality appearance
and articulated geometry, and rendering them in real time. The self-driving
industry has typically employed artists to build these assets. However, this is
expensive, slow, and may not reflect reality. Instead, reconstructing assets
automatically from sensor data collected in the wild would provide a better
path to generating a diverse and large set with good real-world coverage.
Nevertheless, current reconstruction approaches struggle on in-the-wild sensor
data, due to its sparsity and noise. To tackle these issues, we present CADSim,
which combines part-aware object-class priors via a small set of CAD models
with differentiable rendering to automatically reconstruct vehicle geometry,
including articulated wheels, with high-quality appearance. Our experiments
show our method recovers more accurate shapes from sparse data compared to
existing approaches. Importantly, it also trains and renders efficiently. We
demonstrate our reconstructed vehicles in several applications, including
accurate testing of autonomy perception systems.
Comment: CoRL 2022. Project page: https://waabi.ai/cadsim
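The core idea of fitting a CAD-derived prior to sparse, noisy in-the-wild points via gradient-based optimization can be sketched in a toy form. Everything here (the synthetic template, known correspondences, a single scale parameter) is a stand-in assumption; CADSim itself optimizes full articulated geometry and appearance through differentiable rendering:

```python
# Toy sketch of gradient-based template fitting: recover the scale of a
# unit CAD-like template from sparse, noisy observations by minimizing a
# squared reconstruction loss. Not CADSim's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
template = rng.uniform(-1.0, 1.0, size=(200, 3))   # unit-scale template points
true_scale = 1.7

# Sparse, noisy "in-the-wild" observations of a subset of template points
# (correspondences assumed known for this sketch).
idx = rng.choice(200, size=30, replace=False)
observed = true_scale * template[idx]
observed += rng.normal(0.0, 0.02, observed.shape)

scale = 1.0   # initial guess
lr = 0.05
for _ in range(200):
    pred = scale * template[idx]
    residual = pred - observed
    # Analytic gradient of the mean squared error w.r.t. the scale.
    grad = 2.0 * np.sum(residual * template[idx]) / len(idx)
    scale -= lr * grad
# scale now converges close to the true value of 1.7
```

The same descend-on-a-reconstruction-loss structure underlies differentiable rendering: the "renderer" here is just multiplication by a scale, but the gradient flow from observations back to shape parameters is the shared principle.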
The Need and Obstacles in Collaborative Engineering
The purpose of this paper is to explain the need for collaboration and to investigate the obstacles in collaborative engineering. The first part defines the term collaboration. The second part focuses on information outside the domain of engineering that could prove valuable to this field as well. The final part examines the various obstacles in collaborative engineering.